texture representation
Encoding Spatial Distribution of Convolutional Features for Texture Representation
However, global average pooling (GAP) cannot adequately characterize complex distributive patterns of spatial features, while such patterns play an important role in texture-oriented applications, e.g., material recognition and ground terrain classification. In the context of texture representation, this paper addresses the issue by proposing Fractal Encoding (FE), a feature encoding module grounded in multi-fractal geometry. Considering a CNN feature map as a union of level sets of points lying in the 2D space, FE characterizes their spatial layout via a local-global hierarchical fractal analysis which examines the multi-scale power behavior on each level set. This enables a CNN to encode the regularity of the spatial arrangement of image features, leading to a robust yet discriminative spectrum descriptor. In addition, FE has trainable parameters for data adaptivity and can be easily incorporated into existing CNNs for end-to-end training. We applied FE to ResNet-based texture classification and retrieval, and demonstrated its effectiveness on several benchmark datasets.
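The multi-scale analysis underlying FE can be illustrated with a plain box-counting estimate: quantize a feature map into level sets, then measure how the number of occupied boxes scales with box size on each set. This is a minimal sketch of the multi-fractal idea only; the function names and the quantile-based level sets are illustrative assumptions, not the paper's trainable FE module.

```python
import numpy as np

def box_count_dimension(mask, scales=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary level set.

    mask: 2D boolean array marking points whose feature response falls in
    one quantized level. The dimension is the slope of log N(s) versus
    log(1/s), where N(s) counts boxes of side s containing >= 1 point.
    """
    h, w = mask.shape
    counts = []
    for s in scales:
        # Count s x s boxes that contain at least one level-set point.
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Least-squares slope of log N(s) against log(1/s).
    logs = np.log(np.array(scales, dtype=float))
    logn = np.log(np.maximum(counts, 1))
    slope, _ = np.polyfit(-logs, logn, 1)
    return slope

def fractal_spectrum(feature_map, n_levels=8):
    """Quantize a single-channel feature map into n_levels level sets and
    return the per-level box-counting dimensions as a spectrum descriptor."""
    edges = np.quantile(feature_map, np.linspace(0, 1, n_levels + 1))
    spectrum = []
    for k in range(n_levels):
        mask = (feature_map >= edges[k]) & (feature_map <= edges[k + 1])
        spectrum.append(box_count_dimension(mask))
    return np.array(spectrum)
```

A fully occupied level set yields a dimension near 2 (space-filling), while an isolated point yields near 0; intermediate values capture how sparsely or densely a feature's activations tile the plane.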
Texture Synthesis Using Convolutional Neural Networks
Leon Gatys, Alexander S. Ecker, Matthias Bethge
Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.
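The correlation-based representation described above is, in practice, the layer-wise Gram matrix of channel activations. A minimal NumPy version, assuming a (channels, height, width) layout and normalization by the number of spatial positions (both layout and normalization are conventional assumptions, not prescribed by the abstract):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one CNN layer: correlations between feature channels.

    features: array of shape (C, H, W). Flattening the spatial dimensions
    discards all positional information, leaving only channel co-activation
    statistics -- the stationary summary used as the texture representation.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)
```

Because spatial position is marginalized out, two crops of the same texture produce similar Gram matrices even when their pixel layouts differ, which is what makes the statistic suitable for synthesis by matching.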
GraphTEN: Graph Enhanced Texture Encoding Network
Bo Peng, Jintao Chen, Mufeng Yao, Chenhao Zhang, Jianghui Zhang, Mingmin Chi, Jiang Tao
Abstract: Texture recognition is a fundamental problem in computer vision and pattern recognition. Recent progress leverages feature aggregation into discriminative descriptions based on convolutional neural networks (CNNs). However, modeling non-local context relations through visual primitives remains challenging due to the variability and randomness of texture primitives in spatial distributions.
Texture, as a fundamental visual attribute, encapsulates the spatial organization of basic elements within texture-rich images, serving as a vital representation of the underlying […]. Textured regions are typically characterized by repetitive patterns with inherent variability, making them essential pre-attentive visual cues for comprehending […].
Building upon these foundations, recent research has continued to advance texture representation and recognition by exploring innovative perspectives. […] propose a learnable Gabor-based framework that integrates trainable statistical feature extractors with deep neural networks to enhance fine-grained texture recognition.
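GraphTEN's exact architecture is not detailed above, but the non-local context modeling it targets can be sketched generically as one round of similarity-weighted message passing over local texture descriptors. This is an illustrative sketch of generic non-local graph aggregation, not the paper's method:

```python
import numpy as np

def nonlocal_graph_aggregate(nodes, tau=1.0):
    """One generic non-local aggregation step over texture-primitive features.

    nodes: (N, D) array, one row per local descriptor. Each node is updated
    with a similarity-weighted average of all nodes, so spatially distant but
    visually similar primitives can share context.
    """
    sim = nodes @ nodes.T / tau              # pairwise affinities
    sim = sim - sim.max(axis=1, keepdims=True)  # stabilize the softmax
    w = np.exp(sim)
    w = w / w.sum(axis=1, keepdims=True)     # row-normalized soft adjacency
    return w @ nodes                         # message-passing update
```

Each output row is a convex combination of the inputs, weighted by feature similarity rather than spatial proximity, which is the essential difference from an ordinary convolution's fixed local neighborhood.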
Self-Supervised Material and Texture Representation Learning for Remote Sensing Tasks
Peri Akiva, Matthew Purri, Matthew Leotta
Self-supervised learning aims to learn image feature representations without the usage of manually annotated labels. It is often used as a precursor step to obtain useful initial network weights which contribute to faster convergence and superior performance of downstream tasks. While self-supervision allows one to reduce the domain gap between supervised and unsupervised learning without the usage of labels, the self-supervised objective still requires a strong inductive bias to downstream tasks for effective transfer learning. In this work, we present our material and texture based self-supervision method named MATTER (MATerial and TExture Representation Learning), which is inspired by classical material and texture methods. Material and texture can effectively describe any surface, including its tactile properties, color, and specularity. By extension, effective representation of material and texture can describe other semantic classes strongly associated with said material and texture. MATTER leverages multi-temporal, spatially aligned remote sensing imagery over unchanged regions to learn invariance to illumination and viewing angle as a mechanism to achieve consistency of material and texture representation. We show that our self-supervision pre-training method allows for up to 24.22% and 6.33% performance increase in unsupervised and fine-tuned setups, and up to 76% faster convergence on change detection, land cover classification, and semantic segmentation tasks.
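The consistency mechanism described above can be illustrated with a toy objective: embeddings of the same unchanged region, imaged at two different times, are pulled together. The cosine-based loss below is an illustrative stand-in under that assumption, not necessarily MATTER's actual training objective:

```python
import numpy as np

def consistency_loss(z1, z2, eps=1e-8):
    """Toy multi-temporal consistency objective: 1 - cosine similarity
    between embeddings z1, z2 of the same unchanged region captured under
    different illumination / viewing conditions. Minimizing it pushes the
    encoder toward representations invariant to those nuisance factors."""
    z1 = z1 / (np.linalg.norm(z1) + eps)
    z2 = z2 / (np.linalg.norm(z2) + eps)
    return 1.0 - float(z1 @ z2)
```

The loss is 0 when the two embeddings align perfectly and grows as they diverge; in a full pipeline it would be computed over batches of spatially aligned image pairs from unchanged regions.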